
    Dynamic virtual machine placement considering CPU and memory resource requirements

    In cloud data centers, cloud providers offer computing infrastructure as a service in the form of virtual machines (VMs). With the help of virtualization technology, cloud data centers can consolidate VMs on physical machines to minimize costs. VM placement is the process of assigning VMs to appropriate physical machines. An efficient VM placement solution results in better VM consolidation ratios, which ensure better resource utilization and hence greater energy savings. The VM placement process consists of both the initial and the dynamic placement of VMs. In this paper, we experiment with a dynamic VM placement solution that considers different resource types (namely, CPU and memory). The proposed solution uses a genetic algorithm to dynamically reallocate VMs based on the actual demand of the individual VMs, aiming to minimize under-utilization and over-utilization scenarios in the cloud data center. Empirical evaluation using CloudSim highlights the importance of considering multiple resource types. In addition, it demonstrates that the genetic algorithm outperforms the well-known best-fit decreasing algorithm for dynamic VM placement.
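
    The abstract describes the approach only at a high level. Below is a minimal, self-contained sketch of a genetic algorithm for multi-resource VM placement; all demands, capacities, population settings, and fitness weights are invented for illustration and are not taken from the paper.

```python
import random

# Invented per-VM demands (% of one PM) and PM count; the paper's workloads come from CloudSim.
VM_CPU = [20, 35, 10, 50, 25]
VM_MEM = [30, 15, 40, 20, 25]
PM_CAP = 100          # identical capacity per resource on every PM
NUM_PMS = 3

def fitness(assignment):
    """Score an assignment: heavily penalize CPU/memory overload, then prefer fewer active PMs."""
    cpu = [0] * NUM_PMS
    mem = [0] * NUM_PMS
    for vm, pm in enumerate(assignment):
        cpu[pm] += VM_CPU[vm]
        mem[pm] += VM_MEM[vm]
    overload = sum(max(0, c - PM_CAP) + max(0, m - PM_CAP) for c, m in zip(cpu, mem))
    active = sum(1 for c, m in zip(cpu, mem) if c or m)
    return -(10 * overload + active)

def evolve(pop_size=30, generations=200, mutation_rate=0.2):
    """Simple generational GA: truncation selection, one-point crossover, point mutation."""
    pop = [[random.randrange(NUM_PMS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VM_CPU))
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.randrange(NUM_PMS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("VM -> PM assignment:", best, "fitness:", fitness(best))
```

    Penalizing overload on both CPU and memory while preferring fewer active PMs mirrors the abstract's stated goal of minimizing both over-utilization and under-utilization across multiple resource types.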

    Dynamic Tuning for Parameter-Based Virtual Machine Placement

    Virtual machine (VM) placement is the process of allocating virtual machines onto physical machines (PMs) in cloud data centers. Reservation-based VM placement allocates VMs to PMs according to a (statically) reserved VM size, regardless of the actual workload. If, at some point in time, a VM is using only a fraction of its reservation, this leads to PM under-utilization, which wastes energy and, at scale, may result in financial and environmental costs. In contrast, demand-based VM placement consolidates VMs based on the actual workload demand. This may lead to better utilization, but it may incur a higher number of Service Level Agreement Violations (SLAVs) resulting from overloaded PMs and/or from VM migrations between PMs caused by workload fluctuations. To control the trade-off between utilization and the number of SLAVs, parameter-based VM placement allows a provider, through a single parameter, to explore the whole space of VM placement options ranging from demand-based to reservation-based. The idea investigated in this paper is to adjust this parameter continuously at run-time so that a provider can keep the number of SLAVs below a certain (predetermined) threshold while using the smallest possible number of PMs for VM placement. Two dynamic algorithms that select a value of this parameter on the fly are proposed. Experiments conducted using CloudSim evaluate the performance of the two algorithms using one synthetic and one real workload.
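
    The abstract does not specify how the single parameter spans the two extremes. One plausible reading, sketched below, is a linear interpolation between a VM's measured demand and its static reservation, with the parameter nudged up or down at run-time according to the observed SLAV count. The function names, step size, and threshold logic here are assumptions, not the paper's actual algorithms.

```python
def effective_size(demand, reservation, alpha):
    """Placement size for a VM: alpha=0 is pure demand-based, alpha=1 pure reservation-based."""
    return demand + alpha * (reservation - demand)

def tune_alpha(alpha, slavs_observed, slav_threshold, step=0.05):
    """One possible on-the-fly adjustment: raise alpha (more conservative sizing)
    when violations exceed the threshold, lower it otherwise to pack VMs tighter."""
    if slavs_observed > slav_threshold:
        return min(1.0, alpha + step)
    return max(0.0, alpha - step)

# A VM reserving 4 GB of memory but currently demanding only 1 GB:
alpha = 0.5
print(effective_size(1.0, 4.0, alpha))                     # 2.5 -> placed at 2.5 GB
alpha = tune_alpha(alpha, slavs_observed=12, slav_threshold=10)
print(alpha)                                               # 0.55 -> sizing becomes more conservative
```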

    Optimizing virtual machine placement for energy and SLA in clouds using utility functions

    Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users' demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases both operational cost and CO2 emissions. The goal of this paper is to develop an optimized energy- and SLA-aware virtual machine (VM) placement strategy that dynamically assigns VMs to Physical Machines (PMs) in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA) violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VMs-to-PMs assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced average energy consumption by approximately 6% and overall SLA violations by more than 38%, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.
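
    As a concrete illustration of the utility-function formulation, the sketch below scores a candidate placement with a weighted sum of normalized per-objective utilities. The weights, normalization bounds, and metric values are invented for illustration; the paper's actual utility functions may differ.

```python
def normalize(value, worst, best):
    """Map a raw metric onto [0, 1] utility, where 1 is best (lowest cost)."""
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def placement_utility(energy_kwh, slav_rate,
                      w_energy=0.5, w_sla=0.5,
                      energy_worst=100.0, energy_best=40.0,
                      slav_worst=0.2, slav_best=0.0):
    """Weighted sum of per-objective utilities for one candidate placement."""
    u_energy = normalize(energy_kwh, energy_worst, energy_best)
    u_sla = normalize(slav_rate, slav_worst, slav_best)
    return w_energy * u_energy + w_sla * u_sla

# Candidate A: low energy but some violations; candidate B: the reverse.
print(placement_utility(55.0, 0.05))   # 0.75
print(placement_utility(80.0, 0.01))   # ~0.64 -> A wins under equal weights
```

    A genetic algorithm would then use this score as its fitness, searching the space of VMs-to-PMs assignments for one that maximizes the combined utility.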